- AI enables researchers to better define mental illness subtypes and understand patient symptoms.
- The technology may one day help guide psychiatry diagnosis and patient care.
- Addressing ethical concerns surrounding AI in psychiatry may encourage clinicians to adopt the technology.
- This article is part of the "Healthcare Innovation" series, highlighting what healthcare professionals need to do to meet this technology moment.
Psychiatry researchers are using artificial intelligence to develop a better understanding of mental illness, with the goal of creating more effective and personalized treatment plans.
"Psychiatry is a unique field because mental healthcare providers generally don't have specific biomarkers or clear imaging findings indicating mental health pathology to make a diagnosis," said Dr. Ellen Lee, assistant professor of psychiatry, University of California San Diego, and staff psychiatrist, VA San Diego Healthcare System.
Instead, practitioners largely rely on the patient's self-reported symptoms and medical history, she said. To further complicate matters, patients with the same diagnosis may have different symptoms.
Artificial intelligence is helping researchers to assess the heterogeneity of psychiatric conditions more fully, said Dr. Charles Marmar, Lucius N. Littauer psychiatry professor and Department of Psychiatry chair at NYU Grossman School of Medicine. "It can help us determine whether there is one kind of depression or seven kinds of depression."
The technology can sift through data about varying patient behaviors; medical, social, and family histories; differing responses to prior treatments; and information acquired through new forms of monitoring, such as wearable technology, to help fine-tune decisions about care, Lee said.
"AI in psychiatry has been a real revolution in many ways," Lee said. However, clinicians may need some time to adapt to the idea of relying on this technology in the clinic. As AI becomes more available, participating in educational opportunities that address how algorithms enhance clinical decision-making and what data they contain can help with the transition process.
Refining diagnosis and patient monitoring
One goal of AI research is to refine diagnoses into distinct condition subgroups so doctors can personalize treatment. "It's really about precision medicine," Marmar said.
For example, he and his colleagues have been using machine learning - a form of AI that employs computer algorithms and decision rules to analyze and classify large amounts of data - to evaluate the heterogeneity of psychiatric illnesses. The more data these algorithms process, the more accurate they become.
They recently published a study in Translational Psychiatry using machine learning to identify two forms of post-traumatic stress disorder in veterans, a mild form with relatively few symptoms and a chronic, severe form in which patients experienced high levels of depression and anxiety.
Using machine learning, Marmar plans to further explore and validate five possible PTSD subtypes: anxious and dissociative, depressed, impaired cognitive functioning, mild, and severe. The research team will also look at molecular and brain circuit markers, genes, and gene products such as proteins and metabolites to see how they are associated with each form of PTSD. "In the end, we hope to have a diagnostic blood test for each subtype," he said.
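To make the idea concrete, the sketch below shows, in broad strokes, how unsupervised clustering can be used to ask how many symptom-based subgroups a dataset supports. The symptom domains, scores, and cluster counts are invented for illustration; this is not the method or data from Marmar's study.

```python
# Hypothetical sketch: clustering symptom-scale scores into candidate subtypes.
# The symptom domains and data below are simulated for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated severity scores (0-100) for four symptom domains per patient.
# Columns: re-experiencing, avoidance, depression, anxiety.
mild = rng.normal(loc=[20, 20, 15, 15], scale=8, size=(150, 4))
severe = rng.normal(loc=[70, 65, 75, 70], scale=10, size=(150, 4))
scores = np.clip(np.vstack([mild, severe]), 0, 100)

# Standardize so each symptom domain contributes equally to the distance metric.
X = StandardScaler().fit_transform(scores)

# Try several cluster counts and keep the one with the best silhouette score --
# a simple way of asking "how many subtypes does this data support?"
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"Best-supported number of subtypes: {best_k} (silhouette={best_score:.2f})")
```

In practice, researchers would cluster validated clinical measures from real patients and then examine whether the resulting groups differ on outcomes, biology, or treatment response.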
Another avenue in AI research is incorporating sensor data from wearable health technologies, such as a Fitbit, to help determine how to better manage patients, said Lee. For example, mental healthcare providers can ask patients to report on how they've been sleeping for the last month, or they can assess data from technology that monitors sleep patterns.
"There's a real disconnect between how people perceive they sleep versus how they actually sleep," Lee said. Sleep monitoring technology combined with AI may be much more objective. Accurately tracking sleep patterns could give providers an indication of which patients with bipolar disorder might be more at risk of experiencing an episode of mania, allowing for adjustments in medication, she said.
Physician training and ethical concerns
Mental health professionals will likely need to shift from relying solely on patient self-reports and caretaker reports to incorporating AI technology into their clinical decision-making, Lee said.
"There are opportunities to learn about AI for physicians who are interested," she said, adding that annual psychiatry conferences and continuing medical education events are trying to make AI more accessible to clinical audiences.
NYU Grossman School of Medicine also offers AI and machine learning courses to medical students interested in radiology. Those who are also studying psychiatry can focus on neuroradiology within this program.
Psychiatrists need to be informed about what types of data are used to develop AI algorithms before they can feel comfortable using the technology, Lee said. Decisions made in clinical practice can have serious repercussions in a patient's life, such as hospitalization or the loss of capacity to live independently. Physicians may be hesitant to hand this power over to an AI algorithm, Lee said.
"AI algorithms and models are only as good as the data they're built on," Lee said. "We need data built on diverse types of patients, from different racial, ethnic sociodemographic backgrounds."